

Search for: All records

Creators/Authors contains: "Jayasuriya, Suren"


  1. Light transport captures all of the light-path information between a light source and an image sensor. As an important application of light transport, dual photography has been a popular research topic, but it is challenged by long acquisition times, low signal-to-noise ratio, and the storage and processing of a large number of measurements. In this Letter, we propose a novel hardware setup that combines a flying-spot micro-electro-mechanical system (MEMS) modulated projector with an event camera to implement dual photography for 3D scanning in both line-of-sight (LoS) and non-line-of-sight (NLoS) scenes containing a transparent object. In particular, we achieved depth extraction from the LoS scenes and 3D reconstruction of the object in an NLoS scene using event light transport.

     
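As background for the dual-photography relationship used in item 1, the sketch below illustrates the classical primal/dual relationship through a light-transport matrix: the primal image is the transport matrix applied to the projector pattern, and the dual image (the scene as seen from the projector's viewpoint) is obtained via the transpose. The matrix sizes, the random stand-in transport matrix, and the variable names are illustrative assumptions; this is not the Letter's event-based pipeline.

```python
import numpy as np

# Illustrative sizes (not from the paper): a tiny projector/camera pair.
n_proj = 16 * 16      # projector pixels
n_cam = 32 * 32       # camera pixels

# Light-transport matrix T maps a projector illumination pattern p
# to the camera image c:  c = T @ p   (primal photography).
T = np.random.rand(n_cam, n_proj) * 1e-2   # stand-in for a measured transport matrix

p = np.zeros(n_proj)
p[100] = 1.0                  # illuminate a single projector pixel (flying spot)
c = T @ p                     # camera image under that illumination

# Dual photography exploits Helmholtz reciprocity: the image "seen" from the
# projector's viewpoint under the camera's illumination is obtained with T^T.
c_virtual_illum = np.ones(n_cam)          # virtual floodlight at the camera
dual_image = T.T @ c_virtual_illum        # image from the projector's viewpoint
print(dual_image.shape)                   # (n_proj,)
```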
  2. Images captured from a long distance suffer from dynamic image distortion due to turbulent flow of air cells with random temperatures, and thus random refractive indices. This phenomenon, known as image dancing, is commonly characterized by the refractive-index structure constant Cn2 as a measure of the turbulence strength. For many applications, such as atmospheric forecast models, long-range and astronomical imaging, aviation safety, and optical communication, Cn2 estimation is critical for accurately sensing the turbulent environment. Previous methods for Cn2 estimation include estimation from meteorological data (temperature, relative humidity, wind shear, etc.) for single-point measurements, two-ended path-length measurements from optical scintillometers for path-averaged Cn2, and, more recently, estimation of Cn2 from passive video cameras for lower cost and hardware complexity. In this paper, we present a comparative analysis of classical image-gradient methods for Cn2 estimation and modern deep-learning methods based on convolutional neural networks. To enable this, we collect a dataset of video captures along with reference scintillometer measurements for ground truth, and we release this unique dataset to the scientific community. We observe that deep-learning methods achieve higher accuracy when trained on similar data but suffer from generalization errors on other, unseen imagery compared to classical methods. To overcome this trade-off, we present a novel physics-based network architecture that combines learned convolutional layers with a differentiable image-gradient method, maintaining high accuracy while remaining generalizable across image datasets.

     
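As a hedged illustration of the kind of physics-based design described in item 2 (learned convolutional layers combined with a differentiable image-gradient operator), the sketch below implements fixed Sobel-gradient kernels as a differentiable PyTorch layer, preceded by a small learned convolution and followed by a linear head that regresses a scalar turbulence-strength value. The architecture, layer sizes, and the mapping from gradient statistics to a Cn2 estimate are assumptions for illustration, not the paper's network.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GradientCn2Net(nn.Module):
    """Toy sketch: a learned conv layer, a fixed differentiable Sobel-gradient
    operator, and a linear head regressing a scalar turbulence-strength value."""
    def __init__(self):
        super().__init__()
        self.features = nn.Conv2d(1, 8, kernel_size=3, padding=1)  # learned layer
        sobel_x = torch.tensor([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]])
        sobel_y = sobel_x.t()
        kernels = torch.stack([sobel_x, sobel_y]).unsqueeze(1)      # (2, 1, 3, 3)
        self.register_buffer("sobel", kernels)                      # fixed, not learned
        self.head = nn.Linear(2, 1)

    def forward(self, x):                        # x: (B, 1, H, W) grayscale frames
        f = torch.relu(self.features(x))
        f = f.mean(dim=1, keepdim=True)          # collapse learned channels
        g = F.conv2d(f, self.sobel, padding=1)   # differentiable image gradients
        stats = g.abs().mean(dim=(2, 3))         # (B, 2): mean |grad| per direction
        return self.head(stats)                  # (B, 1): scalar estimate

model = GradientCn2Net()
frames = torch.rand(4, 1, 64, 64)
print(model(frames).shape)   # torch.Size([4, 1])
```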
  3. Extreme heat puts tremendous stress on human health and limits people's ability to work, travel, and socialize outdoors. To mitigate heat in public spaces, thermal conditions must be assessed in the context of human exposure and space use. Mean Radiant Temperature (MRT) is an integrated radiation metric that quantifies the total heat load on the human body and is a driving parameter in many thermal comfort indices. Current sensor systems to measure MRT are either expensive and bulky (the 6-directional setup) or slow and inaccurate (globe thermometers), and they do not sense space use. This engineering systems paper introduces the hardware and software setup of a novel, low-cost thermal and visual sensing device (MaRTiny). The system collects meteorological data, concurrently counts the number of people in the shade and sun, and streams the results to an Amazon Web Services (AWS) server. MaRTiny integrates various microcontrollers to collect weather data relevant to human thermal exposure: air temperature, humidity, wind speed, globe temperature, and UV radiation. To detect people in the shade and sun, we implemented state-of-the-art object detection and shade detection models on an NVIDIA Jetson Nano. The system was tested in the field, showing that its meteorological observations compared reasonably well to those of MaRTy (a high-end human-biometeorological station) when both sensor systems were fully sun-exposed. To overcome potential sensing errors due to different exposure levels, we estimated MRT from MaRTiny weather observations using machine learning (a support vector machine), which reduced the RMSE. This paper focuses on the development of the MaRTiny system and lays the foundation for fundamental research in urban climate science investigating how people use public spaces under extreme heat, to inform active shade management and urban design in cities.
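Item 3 mentions estimating MRT from MaRTiny's weather observations with a support vector machine; the sketch below shows one plausible way to set up such a regression with scikit-learn. The feature list, synthetic data, and hyperparameters are illustrative assumptions, not the calibration used in the paper.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

# Hypothetical features per sample: air temperature (degC), relative humidity (%),
# wind speed (m/s), globe temperature (degC), UV index.
rng = np.random.default_rng(0)
X = rng.uniform([20, 10, 0, 25, 0], [45, 60, 5, 60, 11], size=(200, 5))
y = 0.8 * X[:, 3] + 0.3 * X[:, 0] + rng.normal(0, 1.5, 200)  # synthetic MRT target

# Support vector regression with feature scaling, as a stand-in for the
# SVM-based MRT estimation described in the abstract.
model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.5))
model.fit(X, y)

sample = np.array([[38.0, 20.0, 1.5, 48.0, 8.0]])
print(model.predict(sample))   # estimated MRT (degC) for one observation
```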
  4. Background: This paper explores the epistemologies and discourse of undergraduate students at the transdisciplinary intersection of engineering and the arts. Our research questions focus on the kinds of knowledge that students value, use, and identify within an interdisciplinary digital media program, as well as how they talk about using these epistemologies while navigating this transdisciplinary intersection. Six interviews were conducted with students pursuing a semester-long senior capstone project in the digital culture undergraduate degree program in the School of Arts, Media and Engineering at Arizona State University, which emphasizes the intersection of arts, media, and engineering. Results: Using deductive coding followed by discourse analysis, a variety of student epistemologies, including positivism, constructionism, and pragmatism, were observed. “Border epistemologies” are introduced as ways of thinking and constructing knowledge that carry differing value across disciplines. Further, discourse analysis highlighted students' identifications with being either an artist or an engineer and revealed linguistic choices in how students use knowledge and solve problems in these situations. Conclusions: Students in a digital media program use fluid, changing epistemological viewpoints when working on their projects, partly driven by their orientation toward the arts and/or engineering. The findings from this study have implications for the design and teaching of transdisciplinary capstones in the future.
  5. Ensuring ideal lighting when recording videos of people can be a daunting task, requiring a controlled environment and expensive equipment. Methods were recently proposed to perform portrait relighting for still images, enabling after-the-fact lighting enhancement. However, naively applying these methods to each frame independently yields videos plagued with flickering artifacts. In this work, we propose the first method to perform temporally consistent video portrait relighting. To achieve this, our method jointly optimizes the desired lighting and temporal consistency end to end. We do not require ground-truth lighting annotations during training, allowing us to take advantage of the large corpus of portrait videos already available on the internet. We demonstrate that our method outperforms previous work in balancing accurate relighting and temporal consistency on a number of real-world portrait videos.
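To make the idea of jointly optimizing relighting quality and temporal consistency (item 5) concrete, the sketch below shows one generic way such a combined objective could be written: a per-frame relighting term against some reference plus a penalty on differences between consecutive relit frames. The loss weights, the per-frame reference target, and the assumption that consecutive frames are roughly aligned are all illustrative; the paper trains without ground-truth lighting annotations, so its actual objective differs from this toy version.

```python
import torch

def joint_relighting_loss(relit, target, lam=0.1):
    """Toy combined objective: per-frame relighting error plus a temporal
    term that penalizes flicker between consecutive relit frames.
    relit, target: (T, C, H, W) video tensors; lam weights the temporal term."""
    relight_term = torch.mean((relit - target) ** 2)
    temporal_term = torch.mean((relit[1:] - relit[:-1]) ** 2)  # frame-to-frame change
    return relight_term + lam * temporal_term

relit = torch.rand(8, 3, 64, 64, requires_grad=True)
target = torch.rand(8, 3, 64, 64)
loss = joint_relighting_loss(relit, target)
loss.backward()
print(loss.item())
```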
  6. Many computer vision problems face difficulties when imaging through turbulent refractive media (e.g., air and water) due to the refraction and scattering of light. These effects cause geometric distortion that requires either hand-crafted physical priors or supervised learning methods to remove. In this paper, we present a novel unsupervised network to recover the latent distortion-free image. The key idea is to model non-rigid distortions as deformable grids. Our network consists of a grid deformer that estimates the distortion field and an image generator that outputs the distortion-free image. By leveraging the positional encoding operator, we can simplify the network structure while maintaining fine spatial details in the recovered images. Our method doesn’t need to be trained on labeled data and has good transferability across various turbulent image datasets with different types of distortions. Extensive experiments on both simulated and real-captured turbulent images demonstrate that our method can remove both air and water distortions without much customization. 
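Item 6 leverages a positional encoding operator to keep the network compact while preserving fine spatial detail in the recovered images. The sketch below shows the standard Fourier-feature positional encoding commonly used for this purpose, assuming normalized 2D grid coordinates as input; the number of frequency bands and the grid size are arbitrary choices for illustration, and the paper's exact operator may differ.

```python
import numpy as np

def positional_encoding(coords, num_bands=6):
    """Standard Fourier-feature positional encoding.
    coords: (N, 2) array of normalized (x, y) grid coordinates in [0, 1].
    Returns an (N, 2 * 2 * num_bands) array of sin/cos features."""
    freqs = 2.0 ** np.arange(num_bands) * np.pi           # geometric frequency bands
    angles = coords[:, :, None] * freqs                   # (N, 2, num_bands)
    enc = np.concatenate([np.sin(angles), np.cos(angles)], axis=-1)
    return enc.reshape(coords.shape[0], -1)

# Encode a 4x4 grid of normalized coordinates, e.g. the nodes of a deformable grid.
ys, xs = np.meshgrid(np.linspace(0, 1, 4), np.linspace(0, 1, 4), indexing="ij")
grid = np.stack([xs.ravel(), ys.ravel()], axis=1)         # (16, 2)
print(positional_encoding(grid).shape)                    # (16, 24)
```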
  7. In the next 50 years, the rise of computing and artificial intelligence (AI) will transform our society, and it is clear that students will need to engage with AI in their careers. Currently, the United States does not have the infrastructure or capacity in place to support the teaching of AI in the K-12 curriculum. To address these challenges, we introduce the use of visual media as a key bridge technology to engage students in grades 6-8 with AI topics, through a recently NSF-funded ITEST program called ImageSTEAM. Specifically, we focus on the idea of a computational camera, which rethinks the sensing interface between the physical world and intelligent machines and enables students to ponder how sensors and perception will fundamentally augment science and technology in the future. Our first set of workshops (summer 2021) with teachers and students was conducted virtually due to the pandemic, and the results and experiences will be shared and discussed at the conference.